AI fear
The Hall of AI Fears and Hopes: Comparing the Views of AI Influencers and those of Members of the U.S. Public Through an Interactive Platform
Moreira, Gustavo, Bogucka, Edyta Paulina, Constantinides, Marios, Quercia, Daniele
AI development is shaped by academics and industry leaders - let us call them "influencers" - but it is unclear how their views align with those of the public. To address this gap, we developed an interactive platform that served as a data collection tool for exploring public views on AI, including their fears, hopes, and overall sense of hopefulness. We made the platform available to 330 participants representative of the U.S. population in terms of age, sex, ethnicity, and political leaning, and compared their views with those of 100 AI influencers identified by Time magazine. The public fears AI getting out of control, while influencers emphasize regulation, seemingly to deflect attention from their alleged focus on monetizing AI's potential. Interestingly, the views of AI influencers from underrepresented groups such as women and people of color often differ from the views of underrepresented groups in the public.
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.14)
- Asia > Japan > Honshū > Kantō > Kanagawa Prefecture > Yokohama (0.05)
- North America > United States > Illinois > Cook County > Chicago (0.04)
- (10 more...)
- Research Report > New Finding (1.00)
- Questionnaire & Opinion Survey (1.00)
- Overview (0.92)
- Personal > Interview (0.67)
- Law (1.00)
- Education (1.00)
- Banking & Finance (1.00)
- (4 more...)
Harrison Ford shuts down AI fears, dismisses technology's power to 'steal my soul'
Harrison Ford isn't impressed by or afraid of artificial intelligence. In a recent interview with The Wall Street Journal, the "Captain America: Brave New World" star was asked if he was planning on securing control of his likeness from studios, and he brushed off the concern. "You don't need artificial intelligence to steal my soul. You can already do it for nickels and dimes with good ideas and talent," he told the outlet. Ford was referring to the 2024 video game "Indiana Jones and the Great Circle," in which actor Troy Baker provided the voice and motion-capture performance for the character.
- North America > United States > Indiana (0.26)
- North America > United States > New York (0.05)
- Leisure & Entertainment (1.00)
- Media > News (0.53)
- Media > Film (0.51)
'We've discovered the secret of immortality. The bad news is it's not for us': why the godfather of AI fears for humanity
The first thing Geoffrey Hinton says when we start talking, and the last thing he repeats before I turn off my recorder, is that he left Google, his employer of the past decade, on good terms. "I have no objection to what Google has done or is doing, but obviously the media would love to spin me as 'a disgruntled Google employee'." It's an important clarification to make, because it's easy to conclude the opposite. After all, when most people calmly describe their former employer as being one of a small group of companies charting a course that is alarmingly likely to wipe out humanity itself, they do so with a sense of opprobrium. But to listen to Hinton, we're about to sleepwalk towards an existential threat to civilisation without anyone involved acting maliciously at all. Known as one of three "godfathers of AI", in 2018 Hinton won the ACM Turing award – the Nobel prize of computer science – for his work on "deep learning". A cognitive psychologist and computer scientist by training, he wasn't motivated by a desire to radically improve technology: instead, it was to understand more about ourselves. "For the last 50 years, I've been trying to make computer models that can learn stuff a bit like the way the brain learns it, in order to understand better how the brain is learning things," he tells me when we meet in his sister's house in north London, where he is staying (he usually resides in Canada). Looming slightly over me – he prefers to talk standing up, he says – he adopts a tone uncannily reminiscent of a university tutorial, as the 75-year-old former professor explains his research history, and how it has inescapably led him to the conclusion that we may be doomed. In trying to model how the human brain works, Hinton found himself one of the leaders in the field of "neural networking", an approach to building computer systems that can learn from data and experience.
Until recently, neural nets were a curiosity, requiring vast computer power to perform simple tasks worse than other approaches. But in the last decade, as the availability of processing power and vast datasets has exploded, the approach Hinton pioneered has ended up at the centre of a technological revolution. "In trying to think about how the brain could implement the algorithm behind all these models, I decided that maybe it can't – and maybe these big models are actually much better than the brain," he says. A "biological intelligence" such as ours, he says, has advantages. It runs at low power, "just 30 watts, even when you're thinking", and "every brain is a bit different". That means we learn by mimicking others. But that approach is "very inefficient" in terms of information transfer. Digital intelligences, by contrast, have an enormous advantage: it's trivial to share information between multiple copies. "You pay an enormous cost in terms of energy, but when one of them learns something, all of them know it, and you can easily store more copies."
- North America > United States (0.29)
- North America > Canada (0.25)
- Europe > United Kingdom > England > Greater London > London (0.25)
- Government (0.70)
- Information Technology > Services (0.35)
The Download: Geoffrey Hinton's AI fears, and decoding our thoughts
Geoffrey Hinton is a pioneer of deep learning who helped develop some of the most important techniques at the heart of modern artificial intelligence. But after a decade at Google, he is stepping down to focus on new concerns he now has about AI. Stunned by the capabilities of new large language models like GPT-4, Hinton wants to raise public awareness of the serious risks that he now believes may accompany the technology he ushered in. Will Douglas Heaven, our senior AI editor, sat down with Hinton at his north London home just four days before the bombshell announcement of his departure. Hinton explained his belief that machines are on track to be a lot smarter than he thought they'd be, and why he's scared about how that might play out.
- Health & Medicine > Therapeutic Area > Neurology (0.37)
- Health & Medicine > Diagnostic Medicine > Imaging (0.37)
The Download: China's retro AI photos, and experts' AI fears
Across social media, a number of creators are generating nostalgic photographs of China with the help of AI. Even though these images get some details wrong, they are realistic enough to trick and impress many of their followers. The pictures look sophisticated in terms of definition, sharpness, saturation, and color tone. Their realism is partly down to a recent major update of image-making artificial-intelligence program Midjourney that was released in mid-March, which is better not only at generating human hands but also at simulating various photography styles. It's still relatively easy, even for untrained eyes, to tell that the photos are generated by an AI. But for some creators, their experiments are more about trying to recall a specific era in time than trying to trick their audience.
Pseudo AI Bias
Zhai, Xiaoming, Krajcik, Joseph
Pseudo Artificial Intelligence bias (PAIB) is broadly disseminated in the literature. It can create unnecessary fear of AI in society, exacerbate enduring inequities and disparities in access to, and in sharing the benefits of, AI applications, and waste social capital invested in AI research. This study systematically reviews publications in the literature and identifies three types of PAIB, arising from: a) misunderstandings, b) pseudo mechanical bias, and c) over-expectations. We discuss the consequences of, and solutions to, PAIBs, including certifying users of AI applications to mitigate AI fears, providing customized user guidance for AI applications, and developing systematic approaches to monitoring bias. We conclude that PAIB due to misunderstandings, pseudo mechanical bias, and over-expectations of algorithmic predictions is socially harmful.
- Oceania > New Zealand (0.04)
- North America > United States > Michigan > Ingham County > Lansing (0.04)
- North America > United States > Michigan > Ingham County > East Lansing (0.04)
- (3 more...)
- Law (0.95)
- Education > Educational Setting (0.46)
- Health & Medicine > Diagnostic Medicine (0.46)
What AI fears the most
But what would AI be most concerned about when thinking of its future? What technology wants has been a perplexing question ever since Kevin Kelly asked it directly. Yet even before him, many pondered the answer to this question, one that only the most self-absorbed species could ask of its imperfect creation. But is technology truly our creation? What is the essence of technology?
'They wanted me gone': Edward Snowden tells of whistleblowing, his AI fears and six years in Russia
Fri 13 Sep 2019 17.00 BST The world's most famous whistleblower, Edward Snowden, says he has detected a softening in public hostility towards him in the US over his disclosure of top-secret documents that revealed the extent of the global surveillance programmes run by American and British spy agencies. In an exclusive two-hour interview in Moscow to mark the publication of his memoirs, Permanent Record, Snowden said dire warnings that his disclosures would cause harm had not come to pass, and even former critics now conceded "we live in a better, freer and safer world" because of his revelations. In the book, Snowden describes in detail for the first time his background, and what led him to leak details of the secret programmes being run by the US National Security Agency (NSA) and the UK's secret communication headquarters, GCHQ. He describes the 18 years since the September 11 attacks as "a litany of American destruction by way of American self-destruction, with the promulgation of secret policies, secret laws, secret courts and secret wars". Snowden also said: "The greatest danger still lies ahead, with the refinement of artificial intelligence capabilities, such as facial and pattern recognition. "An AI-equipped surveillance camera would be not a mere recording device, but could be made into something closer to an automated police officer." He is concerned the US and other governments, aided by the big internet companies, are moving towards creating a permanent record of everyone on earth, recording the whole of their daily lives. While Snowden feels justified in what he did six years ago, he told the Guardian he was reconciled to being in Russia for years to come and was planning for his future on that basis. He reveals he secretly married his partner, Lindsay Mills, two years ago in a Russian courthouse.
While he would rather be in the US or somewhere like Germany, he is relaxed in Russia, now able to lead a more or less normal daily life. He is less fearful than when he first arrived in 2013, when he felt lonely, isolated and paranoid that he could be targeted in the streets by US agents seeking retribution. "I was very much a person the most powerful government in the world wanted to go away.
- Asia > Russia (1.00)
- Europe > Russia > Central Federal District > Moscow Oblast > Moscow (0.27)
- Europe > Germany (0.25)
- (12 more...)
How to Get Your Head Around AI Fears and Stay Relevant
You've fostered a client relationship over years, sometimes even decades, working with the client side by side to grow their portfolio--and grow your AUM. But when your client dies and the next generation takes over, you lose more than a valued client. You lose their portfolio as well. How can you retain those assets without having to prove your worth over and over again? The key to retaining next-gen clients is instilling a new level of relationship building across your business, and it begins long before the transfer of wealth actually takes place.
AI fears abating among UK consumers, suggests OpenText survey
UK citizens appear to be losing their fear of artificial intelligence (AI) technology, according to survey research by OpenText. The enterprise information management software supplier has repeated a survey it conducted in 2017 among 2,000 British consumers. While the 2017 survey revealed that a quarter of the UK consumers asked believed their job could be replaced by AI software in the next 10 years, this dropped to around one-in-five (21%) in the 2018 survey.
- Europe > United Kingdom (0.25)
- North America > United States (0.05)